Search for: All records

Creators/Authors contains: "Wang, Yufei"

  1. Atomic force microscopy (AFM), in particular force spectroscopy, is a powerful tool for understanding the supramolecular structures associated with polymers grafted to surfaces, especially in regimes of low polymer density where different morphological structures are expected. In this study, we utilize force volume mapping to characterize the nanoscale surfaces of Ag nanocubes (AgNCs) grafted with a monolayer of polyethylene glycol (PEG) chains. Spatially resolved force–distance curves taken for a single AgNC were used to map surface properties, such as adhesion energy and deformation. We confirm the presence of surface octopus micelles that are localized on the corners of the AgNC, using force curves to resolve structural differences between the micelle “bodies” and “legs”. Furthermore, we observe unique features of this system including a polymer corona stemming from AgNC–substrate interactions and polymer bridging stemming from particle–particle interactions.
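Force volume mapping reduces to analyzing one force–distance curve per pixel. As a minimal sketch of that per-curve analysis (not the authors' code), the following computes pull-off force and adhesion energy from a single retract curve; the array names, units, and zero-force baseline are illustrative assumptions.

```python
# Minimal sketch: extract pull-off force and adhesion energy from one AFM
# retract force-distance curve. Names and units are assumptions, not the paper's.
import numpy as np

def adhesion_from_retract(separation_nm, force_nN):
    """separation_nm: tip-sample separation (nm), increasing during retraction.
    force_nN: deflection-derived force (nN); negative = attractive."""
    # Pull-off force: the deepest attractive dip before the tip snaps free.
    pull_off_nN = force_nN.min()

    # Adhesion energy: area between the attractive part of the curve and the
    # zero-force baseline (trapezoid rule). nN * nm = 1e-18 J (attojoules).
    attractive = np.clip(force_nN, None, 0.0)
    energy_aJ = -np.sum(0.5 * (attractive[1:] + attractive[:-1])
                        * np.diff(separation_nm))
    return pull_off_nN, energy_aJ

# Force volume mapping repeats this per pixel to build spatial property maps,
# e.g. distinguishing micelle "bodies" from "legs" by adhesion signature.
```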
  2. While deep networks have achieved broad success in analyzing natural images, when applied to medical scans, they often fail in unexpected situations. This study investigates model sensitivity to domain shifts, such as data sampled from different hospitals or confounded by demographic variables like sex and race, focusing on chest X-rays and skin lesion images. The key finding is that existing visual backbones lack an appropriate prior for reliable generalization in these settings. Inspired by medical training, the authors propose incorporating explicit medical knowledge communicated in natural language into deep networks. They introduce Knowledge-enhanced Bottlenecks (KnoBo), a class of concept bottleneck models that integrate knowledge priors, enabling reasoning with clinically relevant factors found in medical textbooks or PubMed. KnoBo utilizes retrieval-augmented language models to design an appropriate concept space, paired with an automatic training procedure for recognizing these concepts. Evaluations across 20 datasets demonstrate that KnoBo outperforms fine-tuned models on confounded datasets by 32.4% on average. Additionally, PubMed is identified as a promising resource for enhancing model robustness to domain shifts, outperforming other resources in both information diversity and prediction performance. 
    Free, publicly-accessible full text available December 10, 2025
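The bottleneck structure itself is simple to state in code. Below is a minimal sketch of a concept bottleneck classifier in the spirit of the description above, not the released KnoBo implementation; the concept strings, backbone, and dimensions are placeholder assumptions (KnoBo derives its concepts from medical documents via retrieval-augmented language models).

```python
# Minimal concept-bottleneck sketch (illustrative, not the KnoBo codebase):
# the classifier sees only scores over clinically meaningful concepts.
import torch
import torch.nn as nn

CONCEPTS = [  # in KnoBo these come from medical textbooks / PubMed retrieval
    "cardiomegaly is present",
    "lung opacity is present",
    "pleural effusion is present",
]

class ConceptBottleneck(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, n_classes: int):
        super().__init__()
        self.backbone = backbone                    # any visual encoder
        self.to_concepts = nn.Linear(feat_dim, len(CONCEPTS))
        self.classify = nn.Linear(len(CONCEPTS), n_classes)

    def forward(self, x):
        feats = self.backbone(x)
        c = torch.sigmoid(self.to_concepts(feats))  # per-concept probabilities
        logits = self.classify(c)                   # prediction uses concepts only
        return logits, c    # returning c keeps every decision auditable
```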
  3. Robot-assisted dressing could profoundly enhance the quality of life of adults with physical disabilities. To achieve this, a robot can benefit from both visual and force sensing: the former enables the robot to ascertain human body pose and garment deformations, while the latter helps maintain safety and comfort during the dressing process. In this paper, we introduce a new technique that leverages both vision and force modalities for this assistive task. Our approach first trains a vision-based dressing policy using reinforcement learning in simulation with varying body sizes, poses, and types of garments. Because force data cannot be simulated accurately when deformable garments interact with the human body, we then learn a force dynamics model for action planning directly from real-world data. Our proposed method combines the vision-based policy, trained in simulation, with the force dynamics model, learned in the real world, by solving a constrained optimization problem to infer actions that facilitate the dressing process without applying excessive force on the person. We evaluate our system in simulation and in a real-world human study with 10 participants across 240 dressing trials, showing it greatly outperforms prior baselines.
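The constrained-optimization step can be sketched with a simple sampling-based solver: stay as close as possible to the vision policy's proposed action while the learned force model screens out unsafe candidates. Everything below (the sampler, the force cap, the function names) is an illustrative assumption, not the paper's implementation.

```python
# Minimal sketch: pick the action nearest the vision policy's proposal whose
# predicted force stays under a cap. Sampler and names are assumptions.
import numpy as np

def safe_action(policy_action, state, force_model, f_max=5.0,
                n_samples=256, sigma=0.02, seed=0):
    rng = np.random.default_rng(seed)
    # Candidates: the policy's action plus small Gaussian perturbations.
    candidates = policy_action + sigma * rng.standard_normal(
        (n_samples, policy_action.shape[0]))
    candidates = np.vstack([policy_action, candidates])

    # The learned force dynamics model predicts the force each action applies.
    predicted = np.array([force_model(state, a) for a in candidates])
    feasible = predicted <= f_max
    if not feasible.any():
        return None   # no safe action found; a real system would pause or retract

    # Among safe candidates, deviate as little as possible from the policy.
    dists = np.linalg.norm(candidates - policy_action, axis=1)
    dists[~feasible] = np.inf
    return candidates[int(np.argmin(dists))]
```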
  4. Reward engineering has long been a challenge in Reinforcement Learning (RL) research, as it often requires extensive human effort and iterative trial-and-error to design effective reward functions. In this paper, we propose RL-VLM-F, a method that automatically generates reward functions for agents to learn new tasks, using only a text description of the task goal and the agent's visual observations, by leveraging feedback from vision language foundation models (VLMs). The key to our approach is to query these models for preferences over pairs of the agent's image observations, based on the text description of the task goal, and then to learn a reward function from the preference labels, rather than directly prompting the models for a raw reward score, which can be noisy and inconsistent. We demonstrate that RL-VLM-F successfully produces effective rewards and policies across various domains, including classic control as well as manipulation of rigid, articulated, and deformable objects, without the need for human supervision, outperforming prior methods that use large pretrained models for reward generation under the same assumptions.
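Learning a reward from pairwise preference labels is typically done with a Bradley-Terry model; the sketch below shows that step under the assumption that observations arrive as fixed-size feature vectors. The feature dimension, architecture, and names are illustrative, not RL-VLM-F's actual code.

```python
# Minimal sketch: fit a reward network to VLM preference labels via the
# Bradley-Terry model. Feature dim (512) and architecture are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

reward_net = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=3e-4)

def preference_step(obs_a, obs_b, prefer_b):
    """obs_a, obs_b: (N, 512) observation features for each pair;
    prefer_b: (N,) 1.0 if the VLM preferred observation b, else 0.0."""
    r_a = reward_net(obs_a).squeeze(-1)
    r_b = reward_net(obs_b).squeeze(-1)
    # Bradley-Terry: P(b preferred) = sigmoid(r_b - r_a); fit with BCE,
    # rather than asking the VLM for a raw (noisy) reward score directly.
    loss = F.binary_cross_entropy_with_logits(r_b - r_a, prefer_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```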
  5. We present RoboGen, a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation. RoboGen leverages the latest advancements in foundation and generative models. Instead of directly adapting these models to produce policies or low-level actions, we advocate for a generative scheme, which uses these models to automatically generate diversified tasks, scenes, and training supervisions, thereby scaling up robotic skill learning with minimal human supervision. Our approach equips a robotic agent with a self-guided propose-generate-learn cycle: the agent first proposes interesting tasks and skills to develop, and then generates simulation environments by populating pertinent assets with proper spatial configurations. Afterwards, the agent decomposes the proposed task into sub-tasks, selects the optimal learning approach (reinforcement learning, motion planning, or trajectory optimization), generates required training supervision, and then learns policies to acquire the proposed skill. Our fully generative pipeline can be queried repeatedly, producing an endless stream of skill demonstrations associated with diverse tasks and environments. 
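The propose-generate-learn cycle reads naturally as a loop. The sketch below paraphrases the pipeline stages named in the abstract; every function here is a hypothetical stand-in, not RoboGen's actual API.

```python
# Minimal sketch of the self-guided propose-generate-learn cycle; all agent
# methods are hypothetical stand-ins for the pipeline stages in the abstract.
def robogen_cycle(agent, n_iterations):
    demonstrations = []
    for _ in range(n_iterations):
        task = agent.propose_task()                   # foundation model proposes a skill
        scene = agent.generate_scene(task)            # pertinent assets, spatial layout
        for sub_task in agent.decompose(task):
            learner = agent.select_learner(sub_task)  # RL, motion planning, or traj. opt.
            supervision = agent.generate_supervision(sub_task)
            policy = learner.train(scene, supervision)
            demonstrations.append((sub_task, policy.rollout(scene)))
    return demonstrations   # querying repeatedly yields an open-ended stream
```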
  6. Checkerboard lattices, where the resulting structure is open, porous, and highly symmetric, are difficult to create by self-assembly. Synthetic systems that adopt such structures typically rely on shape complementarity and site-specific chemical interactions that are only available to biomolecular systems (e.g., protein, DNA). Here we show the assembly of checkerboard lattices from colloidal nanocrystals that harness the effects of multiple, coupled physical forces at disparate length scales (interfacial, interparticle, and intermolecular) and that do not rely on chemical binding. Colloidal Ag nanocubes were bi-functionalized with mixtures of hydrophilic and hydrophobic surface ligands and subsequently assembled at an air–water interface. Using feedback between molecular dynamics simulations and interfacial assembly experiments, we achieve a periodic checkerboard mesostructure that represents a tiny fraction of the phase space associated with the polymer-grafted nanocrystals used in these experiments. In a broader context, this work expands our knowledge of non-specific nanocrystal interactions and presents a computation-guided strategy for designing self-assembling materials.